The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research.
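To make the baseline concrete, here is a minimal sketch (not the authors' implementation) of how such cues might be linearly combined to rank candidate boxes for a textual entity mention; the cue weights, function names, and toy data are illustrative assumptions.

```python
import numpy as np

def score_boxes(phrase_emb, box_embs, det_scores, box_areas,
                w_sim=1.0, w_det=0.5, w_size=0.2):
    # Cosine similarity between the phrase and each candidate box in a
    # joint image-text embedding space.
    sim = box_embs @ phrase_emb / (
        np.linalg.norm(box_embs, axis=1) * np.linalg.norm(phrase_emb) + 1e-8)
    # Linear combination of cues; the area term encodes the bias
    # towards selecting larger objects.
    return w_sim * sim + w_det * det_scores + w_size * box_areas

rng = np.random.default_rng(0)
phrase = rng.normal(size=128)      # embedding of one entity mention
boxes = rng.normal(size=(5, 128))  # embeddings of 5 candidate boxes
det = rng.random(5)                # detector confidences for the class
area = rng.random(5)               # box areas normalized by image area
best = int(np.argmax(score_boxes(phrase, boxes, det, area)))
print(f"selected box: {best}")
```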
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
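As a hedged illustration of the few-shot prompting the abstract describes, the sketch below loads an open-access BLOOM checkpoint via the Hugging Face transformers library; the small bigscience/bloom-560m variant is assumed here purely to keep the example lightweight, and the prompt is an invented toy demonstration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# Few-shot demonstration: the prompt carries two worked examples,
# and the model is asked to continue the pattern.
prompt = (
    "English: cat -> French: chat\n"
    "English: dog -> French: chien\n"
    "English: bird -> French:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```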
Obstacles on the sidewalk often block the path, limiting passage and resulting in frustration and wasted time, especially for citizens and visitors who use assistive devices (wheelchairs, walkers, strollers, canes, etc.). To enable equal participation and use of the city, all citizens should be able to perform and complete their daily activities in a similar amount of time and effort. Therefore, we aim to offer accessibility information about sidewalks, so that citizens can better plan their routes, and to help city officials identify the locations of bottlenecks and act on them. In this paper, we propose a novel pipeline to estimate obstacle-free sidewalk widths based on 3D point cloud data of the city of Amsterdam, as a first step towards offering a more complete set of information about sidewalk accessibility.
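A minimal sketch of the core geometric step such a pipeline might perform, under the simplifying assumption that obstacle points from the point cloud have already been projected onto a single sidewalk cross-section; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def free_width(cross_offsets, left_edge, right_edge):
    """Widest obstacle-free gap across one sidewalk cross-section.

    cross_offsets : offsets (m) of obstacle points across the sidewalk,
                    measured from the left kerb
    left_edge, right_edge : kerb positions bounding the sidewalk
    """
    pts = np.sort(np.clip(cross_offsets, left_edge, right_edge))
    edges = np.concatenate(([left_edge], pts, [right_edge]))
    return float(np.max(np.diff(edges)))  # largest gap between obstacles

# Toy cross-section: a 2.0 m sidewalk with obstacles at 0.4 m and 0.5 m
print(free_width(np.array([0.4, 0.5]), 0.0, 2.0))  # -> 1.5
```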
Visualizing the uncertainty of ensemble simulations is challenging due to the large size of ensemble datasets and their multivariate and temporal features. A popular approach to studying ensemble uncertainty is to analyze the positional uncertainty of level sets. Probabilistic marching cubes is a technique that performs Monte Carlo sampling of multivariate Gaussian noise distributions to visualize the positional uncertainty of level sets. However, the technique suffers from high computation time, making interactive visualization and analysis impossible. This paper introduces a deep-learning-based approach for learning level-set uncertainty in two-dimensional ensemble data under a multivariate Gaussian noise assumption. We train the model on the first few time steps of the time-varying ensemble data in our workflow. We demonstrate that our trained model accurately infers the uncertainty for new time steps, and is up to 170x faster than the original probabilistic model with serial computation and 10x faster than the original parallel computation.
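For readers unfamiliar with the Monte Carlo baseline, here is a minimal 2D (marching squares) sketch of the probabilistic technique the deep learning model is trained to replace: the four corner values of a grid cell are sampled from a multivariate Gaussian, and the fraction of samples whose corners straddle the isovalue estimates the level-crossing probability. The toy mean and covariance are assumptions.

```python
import numpy as np

def crossing_probability(mean, cov, isovalue, n_samples=1000, rng=None):
    """Monte Carlo estimate of the probability that the level set at
    `isovalue` crosses a grid cell.

    mean : (4,) mean data values at the cell corners
    cov  : (4, 4) covariance of the corner values
    """
    rng = rng or np.random.default_rng()
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    above = samples > isovalue
    # The level set crosses the cell iff the corners disagree about
    # which side of the isovalue they lie on.
    crossing = np.any(above, axis=1) & ~np.all(above, axis=1)
    return crossing.mean()

mean = np.array([0.2, 0.4, 0.6, 0.8])
cov = 0.05 * np.eye(4)
print(crossing_probability(mean, cov, isovalue=0.5))
```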
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and mitigate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing on linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
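To clarify the "calibration" metric the findings refer to, below is a minimal sketch of one standard way to quantify it, expected calibration error (ECE); the binning scheme and toy predictions are illustrative assumptions, not the benchmark's exact protocol.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average the per-bin gap
    |accuracy - mean confidence|, weighted by bin occupancy."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

conf = np.array([0.9, 0.8, 0.6, 0.95])  # model confidences
hit = np.array([1, 1, 0, 1])            # 1 if the answer was correct
print(expected_calibration_error(conf, hit))
```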
The potential for complex systems to exhibit tipping points in which an equilibrium state undergoes a sudden and often irreversible shift is well established, but prediction of these events using standard forecast modeling techniques is quite difficult. This has led to the development of an alternative suite of methods that seek to identify signatures of critical phenomena in data, which are expected to occur in advance of many classes of dynamical bifurcation. Crucially, the manifestations of these critical phenomena are generic across a variety of systems, meaning that data-intensive deep learning methods can be trained on (abundant) synthetic data and plausibly prove effective when transferred to (more limited) empirical data sets. This paper provides a proof of concept for this approach as applied to lattice phase transitions: a deep neural network trained exclusively on 2D Ising model phase transitions is tested on a number of real and simulated climate systems with considerable success. Its accuracy frequently surpasses that of conventional statistical indicators, with performance shown to be consistently improved by the inclusion of spatial indicators. Tools such as this may offer valuable insight into climate tipping events, as remote sensing measurements provide increasingly abundant data on complex geospatially-resolved Earth systems.
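As context for the "conventional statistical indicators" the deep learning model is compared against, here is a hedged sketch of two classical early-warning signals computed over a rolling window: variance and lag-1 autocorrelation, both of which typically rise as a system approaches a bifurcation (critical slowing down). The window size and synthetic series are assumptions.

```python
import numpy as np

def rolling_ews(series, window=50):
    """Rolling-window variance and lag-1 autocorrelation."""
    var, ac1 = [], []
    for t in range(window, len(series)):
        w = series[t - window:t]
        w = w - w.mean()                 # demean the window
        var.append(w.var())
        denom = (w[:-1] ** 2).sum()
        ac1.append((w[:-1] * w[1:]).sum() / denom if denom > 0 else 0.0)
    return np.array(var), np.array(ac1)

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=500)) * 0.01 + rng.normal(size=500)
variance, autocorr = rolling_ews(x)
print(variance[-1], autocorr[-1])
```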
The search tools for exploring NASA's Astrophysics Data System (ADS) can be quite rich and empowering (e.g., similar and trending operators), but researchers cannot yet fully leverage semantic search. For example, a query for "results from the Planck mission" should be able to distinguish between all the various meanings of Planck (a person, a mission, a constant, institutions, and more) without further clarification from the user. At ADS, we are applying modern machine learning and natural language processing techniques to our dataset of recent astronomy publications to train astroBERT, a deep contextual language model based on research from Google. Using astroBERT, we aim to enrich the ADS dataset and improve its discoverability; in particular, we are developing our own named entity recognition tool. We present here our preliminary results and lessons learned.
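A minimal sketch of how a BERT-style named entity recognition model could be applied through the Hugging Face pipeline API; the checkpoint identifier below is an assumption (substitute the actual released astroBERT NER weights), and the entity labels it would emit depend on that model's fine-tuning.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="adsabs/astroBERT",       # assumed checkpoint id
    aggregation_strategy="simple",  # merge word pieces into spans
)

for entity in ner("Results from the Planck mission constrain cosmology."):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```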
Astronomers have typically set out to solve supervised machine learning problems by creating their own representations from scratch. We show that deep learning models trained to answer every Galaxy Zoo DECaLS question learn meaningful semantic representations of galaxies that are useful for new tasks on which the models were never trained. We exploit these representations to outperform recent approaches on practical tasks crucial for investigating large galaxy samples. The first task is identifying galaxies of similar morphology to a query galaxy. Given a single galaxy assigned a free-text tag by humans (e.g. "#diffuse"), we can find galaxies matching that tag for most tags. The second task is identifying the most interesting anomalies to a particular researcher. Our approach is 100% accurate at identifying the most interesting 100 anomalies (as judged by Galaxy Zoo 2 volunteers). The third task is adapting a model to solve a new task using only a small number of newly labelled galaxies. Models fine-tuned from our representations identify ring galaxies better than models fine-tuned from terrestrial images (ImageNet) or trained from scratch. We solve each task with very few new labels: either one (for similarity search) or several hundred (for anomaly detection or fine-tuning). This challenges the long-standing view that deep supervised methods require large new labelled datasets for practical use in astronomy. To help the community benefit from our pretrained models, we release our fine-tuning code, Zoobot. Zoobot is accessible to researchers with no prior experience of deep learning.
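To make the similarity-search task concrete, here is a minimal sketch of nearest-neighbour retrieval in a learned representation space: rank gallery galaxies by cosine similarity of their embeddings to the query's embedding. The embedding dimension and random toy data are assumptions, not Zoobot's actual outputs.

```python
import numpy as np

def most_similar(query_emb, gallery_embs, k=5):
    """Indices of the k gallery galaxies closest to the query galaxy,
    by cosine similarity of their representation vectors."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 256))  # embeddings for 1000 galaxies
query = gallery[42] + 0.01 * rng.normal(size=256)
print(most_similar(query, gallery))     # galaxy 42 should rank first
```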
Quantum computers promise to enhance machine learning for practical applications. Quantum machine learning for real-world data has to handle extensive amounts of high-dimensional data. However, conventional methods for measuring quantum kernels are impractical for large datasets, as they scale with the square of the dataset size. Here, we measure quantum kernels using randomized measurements. The quantum computation time scales linearly with dataset size, and the classical post-processing time quadratically. While our method in general scales exponentially with qubit number, we gain a substantial speed-up when running on intermediate-sized quantum computers. Further, we efficiently encode high-dimensional data into quantum computers, with the number of features scaling linearly in the circuit depth. The encoding is characterized by the quantum Fisher information metric and is related to the radial basis function kernel. Our approach is robust to noise via a cost-free error mitigation scheme. We demonstrate the advantages of our methods for noisy quantum computers by classifying images with the IBM quantum computer. To achieve further speedups, we distribute the quantum computational tasks between different quantum computers. Our method enables benchmarking of quantum machine learning algorithms with large datasets on currently available quantum computers.
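As a hedged illustration of what a quantum kernel is, the sketch below simulates a toy single-qubit angle encoding exactly and plugs the resulting Gram matrix into a classical SVM. On hardware, the state overlaps would instead be estimated via the randomized-measurement protocol the abstract describes; the feature map and dataset here are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Toy angle encoding for one qubit: |psi(x)> = cos(x/2)|0> +
    sin(x/2)|1>. Stands in for a deep encoding circuit."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(X, Y):
    """Kernel entries K(x, y) = |<psi(x)|psi(y)>|^2, computed exactly
    by simulation rather than estimated from measurements."""
    SX = np.array([feature_state(x[0]) for x in X])
    SY = np.array([feature_state(y[0]) for y in Y])
    return (SX @ SY.T) ** 2

X = np.linspace(0, np.pi, 20).reshape(-1, 1)   # toy 1D inputs
y = (X[:, 0] > np.pi / 2).astype(int)          # toy binary labels
clf = SVC(kernel=quantum_kernel).fit(X, y)     # SVM on the quantum kernel
print(clf.score(X, y))
```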
Clinical diagnostic and treatment decisions rely upon the integration of patient-specific data with clinical reasoning. Cancer presents a unique context that influences treatment decisions, given its diverse forms of disease evolution. Biomedical imaging allows noninvasive assessment of disease based on visual evaluations, leading to better clinical outcome prediction and therapeutic planning. Early methods of brain cancer characterization predominantly relied upon statistical modeling of neuroimaging data. Driven by breakthroughs in computer vision, deep learning became the de facto standard in the domain of medical imaging. Integrated statistical and deep learning methods have recently emerged as a new direction in the automation of medical practice, unifying multi-disciplinary knowledge in medicine, statistics, and artificial intelligence. In this study, we critically review major statistical and deep learning models and their applications in brain imaging research, with a focus on MRI-based brain tumor segmentation. The results highlight that model-driven classical statistics and data-driven deep learning form a potent combination for developing automated systems in clinical oncology.